1.32 Neural Computation Theories of Learning
Abstract
This postulate was experimentally confirmed in the hippocampus, where high-frequency stimulation (HFS) of a presynaptic neuron causes long-term potentiation (LTP) in the synapses connecting it to the postsynaptic neurons (Bliss and Lomo, 1973). LTP occurs only if the postsynaptic cell is also active and sufficiently depolarized (Kelso et al., 1986). In many brain areas, this requirement is due to the N-methyl-D-aspartate (NMDA) type of glutamate receptor, which opens only when glutamate is bound to the receptor and the postsynaptic cell is sufficiently depolarized at the same time. Hebb’s rule has served as the starting point for studying the learning capabilities of artificial neural networks (ANNs) and for the theoretical analysis and computational modeling of biological neural systems (Hertz et al., 1991).

The architecture of an ANN determines its behavior and learning capabilities. The architecture of a network is defined by the connections among the artificial neural units and by the function that each unit performs on its inputs. Two general classes of network models have feedforward and recurrent architectures. The simplest feedforward network has one layer of input units and one layer of output units (Fig. 1, left). All connections are unidirectional and project from the input units to the output units. The perceptron is an example of such a simple feedforward network (Rosenblatt, 1958): it can learn to classify patterns from examples. However, a perceptron can only classify patterns that are linearly separable – that is, patterns for which the positive examples can be separated from all negative examples by a hyperplane in the space of input patterns. More powerful multilayer feedforward networks can discriminate patterns that are not linearly separable. In a multilayer feedforward network, the “hidden” layers of units between the input and output layers allow more flexibility in learning features (Rumelhart et al., 1986).
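As an illustration of the perceptron's error-correcting learning rule described above, the following minimal sketch (an illustrative example, not code from the article) trains a single-layer perceptron on the logical AND function, which is linearly separable:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Rosenblatt's rule: nudge weights by lr * (target - prediction) * input."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1 if np.dot(w, x) + b > 0 else 0
            w += lr * (target - pred) * x
            b += lr * (target - pred)
    return w, b

# Logical AND: linearly separable, so the perceptron can learn it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = [1 if np.dot(w, x) + b > 0 else 0 for x in X]
print(preds)  # → [0, 0, 0, 1]
```

Running the same procedure on XOR, which is not linearly separable, would never converge to zero error – the limitation that motivates the multilayer networks discussed above.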
Multilayer feedforward networks can solve some difficult problems (Rumelhart and McClelland, 1986) and underlie the current rapid development of the field of deep learning in machine learning (LeCun et al., 2015). In contrast to strictly feedforward network models, recurrent networks also have feedback connections among units in the network (Fig. 1, right). A simple recurrent network can have a uniform architecture such as all-to-all connectivity combined with symmetrical weights between units, as in the Hopfield network (Hopfield, 1982), or it can be a network with specific connections designed to model a particular biological system (Sporns, 2010).
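The Hopfield network mentioned above can be sketched in a few lines: weights are set by a Hebbian outer-product rule (symmetric, all-to-all, zero diagonal), and recall proceeds by repeatedly updating each unit to the sign of its weighted input. The example below is an illustrative sketch, not the article's own code:

```python
import numpy as np

def store(patterns):
    """Hebbian storage: W = (1/n) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Update each unit to the sign of its weighted input (threshold at 0)."""
    s = state.copy()
    for _ in range(steps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one +/-1 pattern, then recover it from a cue with one flipped bit.
p = np.array([[1, -1, 1, -1, 1, -1]], dtype=float)
W = store(p)
cue = np.array([1, -1, 1, -1, -1, -1])  # fifth bit corrupted
print(recall(W, cue))  # recovers the stored pattern
```

Because the weights are symmetric, each update can only lower the network's energy, so the dynamics settle into a stored pattern acting as an attractor – the property that lets the network complete corrupted inputs.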
Similar references
Learning Curve Consideration in Makespan Computation Using Artificial Neural Network Approach
This paper presents an alternative method using an artificial neural network (ANN) to develop a scheduling scheme which is used to determine the makespan or cycle time of a group of jobs going through a series of stages or workstations. The common conventional method uses mathematical programming techniques and is presented in Gantt chart form. The contribution of this paper is threefold. First...
Integration of Reinforcement Learning and Optimal Decision-Making Theories of the Basal Ganglia
This article seeks to integrate two sets of theories describing action selection in the basal ganglia: reinforcement learning theories describing learning which actions to select to maximize reward and decision-making theories proposing that the basal ganglia selects actions on the basis of sensory evidence accumulated in the cortex. In particular, we present a model that integrates the actor-c...
A Theory for Learning by Weight Flow on Stiefel-Grassman Manifold
Recently we introduced the concept of neural networks learning on the Stiefel-Grassman manifold for MLP-like networks. Contributions of other authors have also appeared in the scientific literature about this topic. The aim of this paper is to present a general theory for it, and to illustrate how existing theories may be explained within the general framework proposed here.
Inferring Mesoscale Models of Neural Computation
Recent years have seen dramatic progress in the development of techniques for measuring the activity and connectivity of large populations of neurons in the brain. However, as these techniques grow ever more powerful—allowing us to even contemplate measuring every neuron in entire brains—a new problem arises: how do we make sense of the mountains of data that these techniques produce? Here, we ...
Application of Learning Theories in Clinical Education
Introduction: The purpose of education is learning. Several theories have been raised about learning, which have tried to explain how learning occurs. They help teachers to choose teaching methods, prepare learning environment and determine students' activities. Given the importance of learning theories in education, this study aimed to review application of learning theories in nursing educati...
Overview of learning theories and its applications in medical education
Introduction: The purpose of teaching is learning, and learning is related to learning theories. These theories describe and explain how people learn. According to various experts' opinions about learning, many theories have emerged. The paper reviews three major approaches – behaviorism, cognitive and constructive learning – and their educational applications in medical science. Methods: this pa...